OpenShift Origin 3.7 : Add Nodes to a Cluster
2018/02/07
Add Nodes to an existing Cluster.
This example is based on the following environment.
In this tutorial, a Compute Node [node03.srv.world (10.0.0.53)] is added as an example.
-----------+---------------------------+---------------------------+------------
           |10.0.0.30                  |10.0.0.51                  |10.0.0.52
+----------+-----------+    +----------+-----------+    +----------+-----------+
|  [ dlp.srv.world ]   |    | [ node01.srv.world ] |    | [ node02.srv.world ] |
|    (Master Node)     |    |    (Compute Node)    |    |    (Compute Node)    |
|    (Compute Node)    |    |                      |    |                      |
+----------------------+    +----------------------+    +----------------------+
[1] | On the target Node to be added, create the same cluster-administration user as on the other nodes and grant it root privileges.
[root@node03 ~]# useradd origin
[root@node03 ~]# passwd origin
[root@node03 ~]# echo -e 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/openshift
[root@node03 ~]# chmod 440 /etc/sudoers.d/openshift
# if Firewalld is running, allow SSH
[root@node03 ~]# firewall-cmd --add-service=ssh --permanent
[root@node03 ~]# firewall-cmd --reload
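Before relying on the sudoers fragment above, it can be sanity-checked. The sketch below is not part of the original procedure: it recreates the fragment in a temporary file and verifies the NOPASSWD rule and the 0440 mode required above (on the real node, `visudo -cf /etc/sudoers.d/openshift` additionally validates the syntax).

```shell
# Recreate the sudoers fragment in a temp file, then check its
# content and that it carries the 0440 mode set by chmod above.
f=$(mktemp)
printf 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL\n' > "$f"
chmod 440 "$f"
grep -q 'NOPASSWD:ALL' "$f" && echo "content ok"
[ "$(stat -c '%a' "$f")" = "440" ] && echo "mode ok"
rm -f "$f"
```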
[2] | On the target Node to be added, install the OpenShift Origin 3.7 repository and Docker. Then create a volume group for Docker direct-lvm to set up an LVM thin pool, as follows.
[root@node03 ~]# yum -y install centos-release-openshift-origin37 docker
[root@node03 ~]# vgcreate vg_origin01 /dev/sdb1
  Volume group "vg_origin01" successfully created
[root@node03 ~]# echo VG=vg_origin01 >> /etc/sysconfig/docker-storage-setup
[root@node03 ~]# systemctl start docker
[root@node03 ~]# systemctl enable docker
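For reference, with [VG=vg_origin01] set, [docker-storage-setup] (run by the docker service on first start) creates a thin pool inside that volume group and writes the devicemapper options to [/etc/sysconfig/docker-storage]. The pool name and option list below are typical of CentOS 7 and are shown as an illustration, not guaranteed verbatim:

```shell
# /etc/sysconfig/docker-storage (generated by docker-storage-setup; illustrative)
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/vg_origin01-docker--pool"
```

You can confirm the thin pool itself with `lvs vg_origin01` after Docker has started.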
[3] | On the Master Node, log in as the cluster admin user and copy the SSH public key to the new Node.
[origin@dlp ~]$ vi ~/.ssh/config
# add the new node
Host dlp
Hostname dlp.srv.world
User origin
Host node01
Hostname node01.srv.world
User origin
Host node02
Hostname node02.srv.world
User origin
Host node03
Hostname node03.srv.world
User origin
[origin@dlp ~]$ ssh-copy-id node03
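The [~/.ssh/config] edit above is done by hand; when nodes are added often, the stanza can be appended idempotently instead. This is a convenience sketch, not part of the original procedure — [add_host] is a hypothetical helper and the temp file stands in for [~/.ssh/config]:

```shell
# Append a Host stanza for a new node only if it is not already present.
cfg=$(mktemp)   # stand-in for ~/.ssh/config
add_host() {
  host=$1; fqdn=$2; user=$3
  grep -q "^Host $host\$" "$cfg" 2>/dev/null || \
    printf 'Host %s\n    Hostname %s\n    User %s\n' "$host" "$fqdn" "$user" >> "$cfg"
}
add_host node03 node03.srv.world origin
add_host node03 node03.srv.world origin   # second call is a no-op
grep -c '^Host node03$' "$cfg"            # → 1 (stanza added exactly once)
rm -f "$cfg"
```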
[4] | On the Master Node, log in as the cluster admin user and run the Ansible playbook to scale out the cluster. For the [/etc/ansible/hosts] file, use the latest one from when you set up or last scaled out the cluster.
# add into OSEv3 section
[OSEv3:children]
masters
nodes
new_nodes

[OSEv3:vars]
ansible_ssh_user=origin
ansible_become=true
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/.htpasswd'}]
openshift_master_default_subdomain=apps.srv.world
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
dlp.srv.world openshift_schedulable=true containerized=false

[etcd]
dlp.srv.world

[nodes]
dlp.srv.world openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node01.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_schedulable=true
node02.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_schedulable=true

# add definition for new node
[new_nodes]
node03.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'south'}" openshift_schedulable=true

[origin@dlp ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
................
................
PLAY RECAP *********************************************************************
dlp.srv.world              : ok=43   changed=2    unreachable=0    failed=0
localhost                  : ok=12   changed=0    unreachable=0    failed=0
node01.srv.world           : ok=29   changed=2    unreachable=0    failed=0
node02.srv.world           : ok=29   changed=2    unreachable=0    failed=0
node03.srv.world           : ok=179  changed=68   unreachable=0    failed=0

INSTALLER STATUS ***************************************************************
Initialization             : Complete
Node Install               : Complete

# show status
[origin@dlp ~]$ oc get nodes
NAME               STATUS    AGE       VERSION
dlp.srv.world      Ready     1h        v1.7.6+a08f5eeb62
node01.srv.world   Ready     1h        v1.7.6+a08f5eeb62
node02.srv.world   Ready     1h        v1.7.6+a08f5eeb62
node03.srv.world   Ready     1m        v1.7.6+a08f5eeb62
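After the playbook finishes, every node should report [Ready]. A small sketch for spotting stragglers at a glance — the sample text below is pasted output; in live use you would pipe `oc get nodes --no-headers` into the same awk filter:

```shell
# Count nodes whose STATUS column is anything other than "Ready".
# sample is pasted output; live: oc get nodes --no-headers | awk '$2 != "Ready"'
sample='dlp.srv.world      Ready     1h        v1.7.6+a08f5eeb62
node01.srv.world   Ready     1h        v1.7.6+a08f5eeb62
node02.srv.world   Ready     1h        v1.7.6+a08f5eeb62
node03.srv.world   Ready     1m        v1.7.6+a08f5eeb62'
echo "$sample" | awk '$2 != "Ready" {print $1}' | wc -l
# → 0  (no nodes pending)
```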
[5] | After the new Nodes have been added, open [/etc/ansible/hosts] again and move the new definitions into the existing [nodes] section, as follows.
# remove new_nodes
[OSEv3:children]
masters
nodes
new_nodes

[OSEv3:vars]
ansible_ssh_user=origin
ansible_become=true
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/.htpasswd'}]
openshift_master_default_subdomain=apps.srv.world
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
dlp.srv.world openshift_schedulable=true containerized=false

[etcd]
dlp.srv.world

[nodes]
dlp.srv.world openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node01.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_schedulable=true
node02.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_schedulable=true
node03.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'south'}" openshift_schedulable=true

# remove this [new_nodes] section and move its definition to the [nodes] section above
[new_nodes]
node03.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'south'}" openshift_schedulable=true
[6] | Incidentally, if you'd like to add a Master Node, it can be configured in the same way, as follows.
[OSEv3:children]
masters
nodes
new_masters
.....
.....
[new_masters]
master02.srv.world openshift_schedulable=true containerized=false

[origin@dlp ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/scaleup.yml